Research Article
Kornwipa Poonpon, Paiboon Manorom, Wirapong Chansanam
CONT ED TECHNOLOGY, Volume 15, Issue 4, Article No: ep475
ABSTRACT
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, most AES systems have focused primarily on native English speakers, leaving a critical gap in the evaluation of non-native speakers’ writing skills. This research addresses this gap by exploring the effectiveness of automated essay-scoring methods designed specifically for non-native speakers. The study acknowledges the unique challenges posed by variations in language proficiency, cultural differences, and linguistic complexities when assessing non-native speakers’ writing abilities. This work focuses on the Automated Student Assessment Prize (ASAP) and Khon Kaen University academic English language test datasets and presents an approach that leverages variants of the long short-term memory (LSTM) network to learn features, comparing results using the Kappa coefficient. The findings demonstrate that the proposed framework and approach, which involve joint learning of different essay representations, yield significant benefits and achieve results comparable to state-of-the-art deep learning models. These results suggest that the novel text representation proposed in this paper holds promise as a new and effective choice for assessing the writing tasks of non-native speakers. The results of this study can be applied to advance educational assessment practices and to promote equitable opportunities for language learners worldwide by enhancing the evaluation process for non-native speakers.
Keywords: automated essay scoring, non-native speakers, machine learning, long short-term memory network, Thailand
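Editor's note: the abstract above describes an LSTM-based scorer evaluated with the Kappa coefficient but does not specify the architecture or training setup. The following is a minimal sketch, not the authors' implementation, assuming a regression formulation over pre-tokenized essays and quadratic weighted kappa as the agreement measure; all layer sizes, the vocabulary cap, and the rounding of predictions to integer scores are illustrative assumptions.

```python
# Minimal sketch of an LSTM essay scorer with kappa evaluation (assumptions noted above).
import numpy as np
from sklearn.metrics import cohen_kappa_score
from tensorflow.keras import layers, models

MAX_WORDS = 20_000  # assumed vocabulary cap for the tokenizer


def build_lstm_scorer() -> models.Model:
    """Embedding -> LSTM -> dense regression head predicting a score scaled to [0, 1]."""
    model = models.Sequential([
        layers.Embedding(MAX_WORDS, 128),
        layers.LSTM(128, dropout=0.3),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


def quadratic_weighted_kappa(y_true, y_pred) -> float:
    """Agreement between human-assigned and predicted integer scores."""
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")


# Usage with integer-encoded, padded essays X and scores y scaled to [0, 1]:
# model = build_lstm_scorer()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
# preds = np.rint(model.predict(X_val).ravel() * max_score).astype(int)
# print(quadratic_weighted_kappa(np.rint(y_val * max_score).astype(int), preds))
```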
Research Article
Kutay Uzun
CONT ED TECHNOLOGY, Volume 9, Issue 4, pp. 423-436
ABSTRACT
Managing crowded classes in terms of classroom assessment is a difficult task due to the amount of time that needs to be devoted to providing feedback on student products. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means of overcoming this problem. Secondarily, the study aimed to test whether automatically assigned scores would correlate with the scores given by a human rater. A quantitative research design employing a machine learning approach was adopted to meet the aims of the study. The data set used for machine learning consisted of 160 scored literary analysis essays written in an English Literature course, each analyzing a theme in a given literary work. To train the automated scoring model, LightSide software was used. First, textual features were extracted and filtered. Then, Logistic Regression, SMO, SVO, Logistic Tree, and Naïve Bayes text classification algorithms were tested using 10-fold cross-validation to reach the most accurate model. To determine whether the scores given by the computer correlated with those given by the human rater, Spearman’s rank-order correlation coefficient was calculated. The results showed that none of the algorithms classified the essay scores in the data set with sufficient accuracy. It was also found that the scores given by the computer did not correlate significantly with those given by the human rater. The findings implied that the amount of data collected in an authentic classroom environment was too small for classification algorithms to support automated essay scoring for classroom assessment.
Keywords: Automated essay scoring, Literary analysis essay, Classification algorithms, Machine learning
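Editor's note: the study above used LightSide (a Java-based workbench) for feature extraction and model comparison. The sketch below is not the LightSide workflow itself; it is a comparable pipeline in scikit-learn, assuming TF-IDF unigram/bigram features, 10-fold cross-validation over a few of the named classifier families, and Spearman's rho between predicted and human scores. Feature settings and classifier parameters are illustrative assumptions.

```python
# Minimal sketch of the cross-validated classification and correlation check (assumptions noted above).
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def evaluate_classifiers(essays: list[str], scores: list[int]) -> None:
    """Compare classifiers by 10-fold CV accuracy and Spearman's rho against human scores."""
    classifiers = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Linear SVM (SMO-like)": SVC(kernel="linear"),
        "Naive Bayes": MultinomialNB(),
    }
    for name, clf in classifiers.items():
        pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
        accuracy = cross_val_score(pipeline, essays, scores, cv=10).mean()
        predicted = cross_val_predict(pipeline, essays, scores, cv=10)
        rho, p_value = spearmanr(scores, predicted)
        print(f"{name}: accuracy={accuracy:.2f}, rho={rho:.2f} (p={p_value:.3f})")
```

With only 160 essays, each of the 10 folds holds out roughly 16 essays, which illustrates the data-size limitation the abstract reports.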